Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities
Abstract
In this paper, we present a novel Loss-Sensitive GAN (LS-GAN) that learns a loss function to separate generated samples from their real examples. An important property of the LS-GAN is that it allows the generator to focus on improving poor data points that are far from real examples, rather than wasting effort on samples that have already been well generated, and thus can improve the overall quality of generated samples. The theoretical analysis also shows that the LS-GAN can generate samples following the true data density. In particular, we present a regularity condition on the underlying data density, which allows us to use a class of Lipschitz losses and generators to model the LS-GAN. It relaxes the assumption that the classic GAN should have infinite modeling capacity to obtain a similar theoretical guarantee. Furthermore, we derive a non-parametric solution that characterizes the upper and lower bounds of the losses learned by the LS-GAN, both of which are piecewise linear and have non-vanishing gradient almost everywhere. Therefore, there should be sufficient gradient to update the generator of the LS-GAN even when the loss function is fully optimized, relieving the vanishing-gradient problem in the classic GAN and making the LS-GAN generator easier to train. We also generalize the unsupervised LS-GAN to a conditional model that generates samples based on given conditions, and show its applications in both supervised and semi-supervised learning problems. The experimental results demonstrate competitive performance on both classification and generation tasks.
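The margin idea described in the abstract can be sketched numerically: the learned loss should assign a real example a value lower than that of a generated sample by at least a data-dependent margin, with violations penalized by a hinge. The following is a minimal numpy sketch, not the paper's implementation; the function and parameter names (`lsgan_critic_objective`, `lam`) and the hinge relaxation form are illustrative assumptions.

```python
import numpy as np

def lsgan_critic_objective(loss_real, loss_fake, margin, lam=1.0):
    """Hinge-relaxed objective for the learned loss function (sketch).

    loss_real: values of the learned loss on real samples, L(x)
    loss_fake: values on generated samples, L(G(z))
    margin:    data-dependent margin Delta(x, G(z)), e.g. an L1 distance
    lam:       weight on margin violations (illustrative name)
    """
    # Penalize pairs where the real loss is not below the fake loss
    # by at least the margin; well-separated pairs contribute zero,
    # which is how the generator's effort concentrates on poor samples.
    hinge = np.maximum(0.0, margin + loss_real - loss_fake)
    return loss_real.mean() + lam * hinge.mean()

# Toy check: the first pair already satisfies the margin (no penalty),
# the second does not (penalty equal to the full margin).
value = lsgan_critic_objective(
    loss_real=np.array([0.0, 0.0]),
    loss_fake=np.array([2.0, 0.0]),
    margin=np.array([1.0, 1.0]),
)
```

Because satisfied constraints contribute nothing to the hinge term, gradients flow only through violated pairs, matching the abstract's point that well-generated samples stop attracting the generator's capacity.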
Similar resources
Varying k-Lipschitz Constraint for Generative Adversarial Networks
Kanglin Liu. Abstract: Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN with gradient penalty (WGAN-GP) makes progress toward stable training. The gradient penalty plays the role of enforcing a Lipschitz constraint. Further investigation of the gradient penalty shows that it may impose restrict...
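The gradient penalty this snippet refers to enforces the Lipschitz constraint by pushing the critic's gradient norm toward 1 at points interpolated between real and generated samples. Below is a minimal numpy sketch under stated assumptions: the critic's gradient is supplied as a callable (`critic_grad`), which in a real framework would come from automatic differentiation, and all names are illustrative.

```python
import numpy as np

def gradient_penalty(critic_grad, real, fake, rng):
    """WGAN-GP-style penalty E[(||grad D(x_hat)|| - 1)^2] (sketch).

    critic_grad: callable returning dD/dx at each input row (assumed
                 to be provided; normally computed via autodiff)
    real, fake:  batches of real and generated samples, shape (n, d)
    """
    # Random interpolation points between real and fake samples.
    eps = rng.uniform(size=(real.shape[0], 1))
    interp = eps * real + (1.0 - eps) * fake
    # Penalize deviation of the gradient norm from 1 at those points.
    grads = critic_grad(interp)
    norms = np.linalg.norm(grads, axis=1)
    return np.mean((norms - 1.0) ** 2)

# Toy check: a linear critic D(x) = w.x has constant gradient w, so a
# unit-norm w yields zero penalty (the critic is exactly 1-Lipschitz).
w = np.array([0.6, 0.8])  # ||w|| = 1
gp = gradient_penalty(
    critic_grad=lambda x: np.tile(w, (x.shape[0], 1)),
    real=np.array([[1.0, 0.0]]),
    fake=np.array([[0.0, 1.0]]),
    rng=np.random.default_rng(0),
)
```

The linear-critic check makes the mechanism concrete: any critic whose gradient norm stays at 1 on the interpolation lines incurs no penalty, which is precisely the 1-Lipschitz behavior the constraint targets.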
Improvement of generative adversarial networks for automatic text-to-image generation
This research concerns the use of deep learning tools and image-processing technology for the automatic generation of images from text. Previous research has used a single sentence to produce an image. In this research, a memory-based hierarchical model is presented that uses three different descriptions, given in the form of sentences, to produce and refine the image. The proposed ...
Automatic Colorization of Grayscale Images Using Generative Adversarial Networks
Automatic colorization of grayscale images poses a unique challenge in information retrieval. The goal of this field is to colorize images that have lost some color channels (such as the RGB channels, or the AB channels in the LAB color space) while only the brightness channel remains available, which is usually the case for a vast array of old photos and portraits. Having the ability to coloriz...
On the regularization of Wasserstein GANs
Since their invention, generative adversarial networks (GANs) have become a popular approach for learning to model a distribution of real (unlabeled) data. Convergence problems during training are overcome by Wasserstein GANs which minimize the distance between the model and the empirical distribution in terms of a different metric, but thereby introduce a Lipschitz constraint into the optimiza...
MTGAN: Speaker Verification through Multitasking Triplet Generative Adversarial Networks
In this paper, we propose an enhanced triplet method that improves the encoding of embeddings by jointly utilizing a generative adversarial mechanism and multitasking optimization. We extend our triplet encoder with Generative Adversarial Networks (GANs) and a softmax loss function. The GAN is introduced to increase the generality and diversity of samples, while softmax reinforces fe...
Journal: CoRR
Volume: abs/1701.06264
Pages: -
Publication date: 2017